10 research outputs found

    A Novel Real-Time Edge-Cloud Big Data Management and Analytics Framework for Smart Cities

    Exposing city information to dynamic, distributed, powerful, scalable, and user-friendly big data systems is expected to enable a wide range of new opportunities; however, the size, heterogeneity, and geographical dispersion of the data often make it difficult to combine, analyze, and consume them in a single system. In the context of the H2020 CLASS project, we describe an innovative framework aiming to facilitate the design of advanced big-data analytics workflows. The proposal covers the whole compute continuum, from edge to cloud, and relies on a well-organized distributed infrastructure exploiting: a) edge solutions with advanced computer vision technologies enabling the real-time generation of “rich” data from a vast array of sensor types; b) cloud data management techniques offering efficient storage, real-time querying, and updating of the high-frequency incoming data at different granularity levels. We specifically focus on obstacle detection and tracking for edge processing, and consider a traffic density monitoring application with hierarchical data aggregation features for cloud processing; the discussed techniques will constitute the groundwork enabling many further services. The tests are performed on the real use case of the Modena Automotive Smart Area (MASA).
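
    A minimal sketch of the kind of hierarchical aggregation described above, assuming hypothetical record fields and window sizes (this is not the CLASS project's actual API): edge detections are first counted per road segment and fine time window, and that level is then rolled up into coarser windows without re-reading the raw data.

        # Hierarchical traffic-density aggregation sketch (illustrative names).
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class Detection:          # hypothetical "rich" record produced at the edge
            segment_id: str       # road segment the tracked object was on
            timestamp: float      # seconds since epoch
            object_class: str     # e.g. "car", "bus", "pedestrian"

        def densify(detections, window_s):
            """Count detections per (segment, time window): one granularity level."""
            counts = defaultdict(int)
            for d in detections:
                counts[(d.segment_id, int(d.timestamp // window_s))] += 1
            return counts

        def roll_up(fine_counts, fine_window_s, coarse_window_s):
            """Aggregate a fine level into a coarser one, reusing the fine counts."""
            factor = coarse_window_s // fine_window_s
            coarse = defaultdict(int)
            for (segment, bucket), n in fine_counts.items():
                coarse[(segment, bucket // factor)] += n
            return coarse

        if __name__ == "__main__":
            dets = [Detection("MASA-01", t, "car") for t in range(0, 600, 7)]
            per_minute = densify(dets, 60)                # finest granularity
            per_5_minutes = roll_up(per_minute, 60, 300)  # coarser level for queries
            print(dict(per_5_minutes))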

    Model-Based Underwater 6D Pose Estimation from RGB

    Object pose estimation underwater allows an autonomous system to perform tracking and intervention tasks. Nonetheless, underwater target pose estimation is remarkably challenging due to, among many factors, limited visibility, light scattering, cluttered environments, and constantly varying water conditions. One approach is to employ sonar or laser sensing to acquire 3D data, but besides being costly, the resulting data are normally noisy. For this reason, the community has focused on extracting pose estimates from RGB input. However, the literature is scarce and exhibits low detection accuracy. In this work, we propose an approach consisting of a 2D object detection stage and a 6D pose estimation stage that reliably obtains object poses in different underwater scenarios. To test our pipeline, we collect and make available a dataset of 4 objects in 10 different real scenes with annotations for object detection and pose estimation. We test our proposal in real and synthetic settings and compare its performance with similar end-to-end methodologies for 6D object pose estimation. Our dataset contains some challenging objects with symmetrical shapes and poor texture. Regardless of such object characteristics, our proposed method outperforms state-of-the-art pose accuracy by ~8%. We finally demonstrate the reliability of our pose estimation pipeline through experiments with an underwater manipulator in a reaching task. Comment: Under RA-L submission.
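
    As a hedged illustration of the two-stage idea above (a 2D detection step followed by a 6D pose step), the snippet below recovers a pose from assumed 2D-3D keypoint correspondences with a generic PnP solve; the paper's actual networks and dataset are not reproduced here, and all names are placeholders.

        # Generic 6D pose recovery from 2D keypoints of a detected object.
        import numpy as np
        import cv2

        def estimate_6d_pose(model_points_3d, keypoints_2d, camera_matrix):
            """Return a 4x4 object-to-camera transform from 2D-3D correspondences."""
            ok, rvec, tvec = cv2.solvePnP(
                model_points_3d.astype(np.float64),   # Nx3 points on the object model
                keypoints_2d.astype(np.float64),      # Nx2 pixels from the 2D detector
                camera_matrix,                        # 3x3 camera intrinsics
                distCoeffs=None,                      # assume pre-rectified images
                flags=cv2.SOLVEPNP_EPNP,
            )
            if not ok:
                raise RuntimeError("PnP failed")
            R, _ = cv2.Rodrigues(rvec)                # rotation vector -> 3x3 matrix
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, tvec.ravel()
            return T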

    Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds

    This paper presents a multi-layer motion planning and control architecture for autonomous racing, capable of avoiding static obstacles, performing active overtakes, and reaching velocities above 75 m/s. Both the offline global trajectory generation and the online model predictive controller rely heavily on optimization and on dynamic models of the vehicle, in which tire and camber effects are represented by an extended version of the basic Pacejka Magic Formula. The proposed single-track model is identified and validated using multi-body motorsport libraries that allow the vehicle dynamics to be simulated properly, which is especially useful when real experimental data are missing. The fundamental regularization terms and constraints of the controller are tuned to reduce the rate of change of the inputs while ensuring acceptable velocity and path tracking. The motion planning strategy consists of a Frenét-frame-based planner that considers a forecast of the opponent's motion produced by a Kalman filter. The planner chooses the collision-free path and velocity profile to be tracked over a 3-second horizon in order to realize different goals such as following and overtaking. The proposed solution has been applied on a Dallara AV-21 racecar and tested on oval race tracks, achieving lateral accelerations up to 25 m/s^2. Comment: Accepted to the 25th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2022).
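
    Since the abstract names the basic Pacejka Magic Formula, a short sketch of that basic formula is given below; the coefficients are illustrative defaults, not the identified values of the extended, camber-aware model used on the real vehicle.

        # Basic Pacejka "Magic Formula" for lateral tyre force (illustrative coefficients).
        import math

        def pacejka_lateral_force(slip_angle_rad, B=10.0, C=1.9, D=1.0, E=0.97, Fz=4000.0):
            """F_y = Fz * D * sin(C * atan(B*a - E*(B*a - atan(B*a))))."""
            bx = B * slip_angle_rad
            return Fz * D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx))))

        # Example: lateral force at a few slip angles.
        for deg in (1, 3, 6, 12):
            print(f"{deg:>2} deg -> {pacejka_lateral_force(math.radians(deg)):8.1f} N")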

    er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High Speeds

    The Indy Autonomous Challenge (IAC) brought together, for the first time in history, nine autonomous racing teams competing at unprecedented speeds and in head-to-head scenarios, using independently developed software on open-wheel racecars. This paper presents the complete software architecture used by team TII EuroRacing (TII-ER), covering all the modules needed to avoid static obstacles, perform active overtakes, and reach speeds above 75 m/s (270 km/h). In addition to the most common modules related to perception, planning, and control, we discuss the approaches used for vehicle dynamics modelling, simulation, telemetry, and safety. Overall results and the performance of each module are described, as well as the lessons learned during the first two events of the competition on oval tracks, where the team placed second and third, respectively. Comment: Preprint accepted to the Field Robotics "Opportunities and Challenges with Autonomous Racing" Special Issue.
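
    The snippet below is only an illustrative skeleton of how such a layered stack can be wired together (generic placeholder names, not team TII EuroRacing's code): perception feeds planning, planning feeds control, and a safety layer can always override the commanded inputs.

        # Skeleton of one iteration of a perception-planning-control stack with a safety override.
        from dataclasses import dataclass

        @dataclass
        class Command:
            steering: float   # rad
            throttle: float   # 0..1
            brake: float      # 0..1

        def stack_step(sensors, perception, planner, controller, safety):
            world = perception.update(sensors)          # detections + opponent tracks
            trajectory = planner.plan(world)            # collision-free path and speed profile
            cmd = controller.track(trajectory, world)   # e.g. MPC tracking commands
            if not safety.is_safe(world, cmd):          # watchdog, limits, race flags
                cmd = Command(steering=0.0, throttle=0.0, brake=1.0)  # safe fallback
            return cmd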

    Exhaustive analysis of DAG tasks: solutions for modern real-time embedded systems

    Modern cyber-physical embedded systems integrate several complex functionalities that are subject to tight timing constraints. Unfortunately, traditional sequential task models and uniprocessor solutions cannot be applied in this context: a more expressive model becomes necessary. In this scenario, the Directed Acyclic Graph (DAG) is a suitable model to express the complexity and the parallelism of the tasks of these kinds of systems. In recent years, several methods with different settings have been proposed to solve the schedulability problem for applications featuring DAG tasks. However, many open problems remain. Besides schedulability, aspects like the freshness of data or the reaction to an event are crucial for the performance of these kinds of systems. For example, a typical application in the automotive field is composed of sensing the environment, planning, and actuating based on the processed data. The end-to-end control latency is therefore decisive, and it can become very hard to bound in real scenarios. This thesis represents an effort in both directions: (i) the schedulability of DAG tasks on a multiprocessor, and (ii) the supervision of the end-to-end latency of multi-rate tasks. For the former problem, a survey of the state of the art of the Directed Acyclic Graph task model is presented, with a focus on the schedulability tests that are most effective and easiest to implement and adopt. For the latter, a method is proposed to convert a multi-rate DAG task set with timing constraints into a single-rate DAG that optimizes schedulability, data age, and reaction latency.
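
    To make the DAG task model above concrete, the sketch below computes a DAG's volume (total worst-case execution time) and critical-path length, then applies Graham's classic makespan bound for work-conserving scheduling on m cores, makespan <= len + (vol - len) / m. This is a textbook bound used only as an example, not one of the specific tests surveyed in the thesis, and the subtask names and WCETs are hypothetical.

        # DAG volume, critical path, and Graham's makespan bound (illustrative data).
        from functools import lru_cache

        wcet  = {"sense": 2, "detect": 3, "localize": 2, "plan": 4, "act": 1}
        edges = {"sense": ["detect", "localize"], "detect": ["plan"],
                 "localize": ["plan"], "plan": ["act"], "act": []}

        def volume(wcet):
            return sum(wcet.values())

        def critical_path(wcet, edges):
            @lru_cache(maxsize=None)
            def longest_from(v):                       # longest WCET chain starting at v
                return wcet[v] + max((longest_from(s) for s in edges[v]), default=0)
            return max(longest_from(v) for v in wcet)

        m = 4
        vol, cpl = volume(wcet), critical_path(wcet, edges)
        print(f"volume={vol}, critical path={cpl}, makespan bound={cpl + (vol - cpl) / m:.2f}")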

    Biomedical Image Classification via Dynamically Early Stopped Artificial Neural Network

    It is well known that biomedical imaging analysis plays a crucial role in the healthcare sector and produces a huge quantity of data. These data can be exploited to study diseases and their evolution more deeply or to predict their onset. In particular, image classification represents one of the main problems in the biomedical imaging context. Due to the data complexity, biomedical image classification can be carried out by trainable mathematical models, such as artificial neural networks. When employing a neural network, one of the main challenges is to determine the optimal duration of the training phase to achieve the best performance. This paper introduces a new adaptive early stopping technique that sets the optimal training time based on dynamic selection strategies for the learning rate and the mini-batch size of the stochastic gradient method used as the optimizer. The numerical experiments, carried out on different artificial neural networks for image classification, show that the developed adaptive early stopping procedure matches the performance reported in the literature while completing training in fewer epochs. The numerical examples have been performed on the CIFAR100 dataset and on two distinct MedMNIST2D datasets, a large-scale lightweight benchmark for biomedical image classification.
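
    The sketch below illustrates the general idea in a generic training loop: stop adaptively on a validation-loss plateau, adjusting the learning rate and mini-batch size once before giving up. The thresholds, factors, and callables are illustrative assumptions, not the paper's specific selection strategies.

        # Generic adaptive early stopping around user-supplied training/validation callables.
        def train_with_adaptive_early_stopping(
            train_one_epoch,          # callable(lr, batch_size) -> None (assumed interface)
            validate,                 # callable() -> validation loss (assumed interface)
            lr=0.1, batch_size=32,
            patience=5, max_epochs=200, lr_decay=0.5, batch_growth=2,
        ):
            best_loss, best_epoch = float("inf"), 0
            for epoch in range(max_epochs):
                train_one_epoch(lr=lr, batch_size=batch_size)
                loss = validate()
                if loss < best_loss - 1e-4:                 # meaningful improvement
                    best_loss, best_epoch = loss, epoch
                elif epoch - best_epoch >= patience:
                    if lr_decay is not None:
                        lr *= lr_decay                      # shrink the step size
                        batch_size *= batch_growth          # reduce gradient noise
                        best_epoch, lr_decay = epoch, None  # allow a single adaptation
                    else:
                        break                               # plateau persists: stop early
            return best_loss, epoch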